I worry about that in relation to our energy grid. But are the guys who want to stop the A.I. smart enough to create programs to outsmart the programs from those with a lot of money? I think it is possible.
>> I may have misunderstood, but you are looking for two types of safeguards?
>> The A.I. community -- this is about interviews with A.I. developers -- we want safeguards.
>> But by definition the singularity is rapidly expanding intelligence?
>> Once you get the superintelligence, yes. There is a group, the Machine Intelligence Research Institute, that is trying to have safe A.I. -- the programs that create the seed A.I. -- that tries to learn lessons from the industrial process, with safety built in from the inception. Right now we have an advanced cognitive architecture, and we will put a condom on it. But you have to start from scratch. There is a book called "Normal Accidents" about that kind of development, and the idea is to learn a lesson from that. And the institute gets $20,000 in donations per year, against the -- budget of $50 billion.
>> When is it apparent these mach